Local matching indicators for transport with concave costs
In this note, we introduce a class of indicators that make it possible to
compute efficiently optimal transport plans associated with arbitrary
distributions of demands and supplies in the case where the cost function is
concave. The computational cost of these indicators is small and independent
of . A hierarchical use of them yields an efficient algorithm.
A Wasserstein-type distance in the space of Gaussian Mixture Models
In this paper we introduce a Wasserstein-type distance on the set of Gaussian mixture models. This distance is defined by restricting the set of possible coupling measures in the optimal transport problem to Gaussian mixture models. We derive a very simple discrete formulation for this distance, which makes it suitable for high-dimensional problems. We also study the corresponding multi-marginal and barycenter formulations. We show some properties of this Wasserstein-type distance, and we illustrate its practical use with some examples in image processing.
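The discrete formulation described above reduces, in essence, to a small linear program: the cost of moving mass between two Gaussian components is the closed-form squared 2-Wasserstein distance between them, and the optimal coupling is sought over the mixture weights. The sketch below illustrates this reading of the definition (the function names and the use of a generic LP solver are our assumptions, not the authors' code):

```python
import numpy as np
from scipy.linalg import sqrtm
from scipy.optimize import linprog

def w2_gaussians(m0, S0, m1, S1):
    # Closed-form squared 2-Wasserstein distance between N(m0,S0), N(m1,S1):
    # ||m0-m1||^2 + tr(S0 + S1 - 2 (S0^{1/2} S1 S0^{1/2})^{1/2})
    r0 = sqrtm(S0)
    cross = sqrtm(r0 @ S1 @ r0)
    return float(np.sum((m0 - m1) ** 2)
                 + np.trace(S0 + S1 - 2 * np.real(cross)))

def mw2(pi0, means0, covs0, pi1, means1, covs1):
    # Discrete OT between mixture weights, with Gaussian W2^2 as ground cost.
    K, L = len(pi0), len(pi1)
    C = np.array([[w2_gaussians(means0[k], covs0[k], means1[l], covs1[l])
                   for l in range(L)] for k in range(K)])
    # Marginal constraints: rows sum to pi0, columns sum to pi1.
    A_eq = []
    for k in range(K):
        row = np.zeros((K, L)); row[k, :] = 1.0; A_eq.append(row.ravel())
    for l in range(L):
        row = np.zeros((K, L)); row[:, l] = 1.0; A_eq.append(row.ravel())
    res = linprog(C.ravel(), A_eq=np.array(A_eq),
                  b_eq=np.concatenate([pi0, pi1]), bounds=(0, None))
    return res.fun
```

For two single-component mixtures this reduces to the closed-form Gaussian W2 distance itself.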
A Bayesian Hyperprior Approach for Joint Image Denoising and Interpolation, with an Application to HDR Imaging
Recently, impressive denoising results have been achieved by Bayesian
approaches which assume Gaussian models for the image patches. This improvement
in performance can be attributed to the use of per-patch models.
Unfortunately, such an approach is particularly unstable for most inverse
problems beyond denoising. In this work, we propose the use of a hyperprior to
model image patches, in order to stabilize the estimation procedure. There are
two main advantages to the proposed restoration scheme. First, it is adapted
to diagonal degradation matrices, and in particular to missing-data problems
(e.g. inpainting of missing pixels or zooming). Second, it can deal with
signal-dependent noise models, which are particularly well suited to digital
cameras. As such, the scheme is especially adapted to computational
photography. In order to
illustrate this point, we provide an application to high dynamic range imaging
from a single image taken with a modified sensor, which shows the effectiveness
of the proposed scheme.
Comment: Some figures are reduced to comply with arXiv's size constraints.
Full-size images are available as HAL technical report hal-01107519v5; IEEE
Transactions on Computational Imaging, 201
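To make the role of the Gaussian patch model concrete, the sketch below computes the standard MAP estimate of a patch under a fixed Gaussian prior and a diagonal degradation matrix. This is the generic Gaussian-model computation underlying such restoration schemes, not the paper's hyperprior estimation itself, and all names are hypothetical:

```python
import numpy as np

def map_restore_patch(y, A_diag, mu, Sigma, sigma_noise):
    # MAP estimate of a patch x under a Gaussian prior N(mu, Sigma) and a
    # diagonal degradation model y = A x + n, with n ~ N(0, sigma_noise^2 I).
    # Minimising ||A x - y||^2 / sigma^2 + (x - mu)^T Sigma^{-1} (x - mu)
    # gives x = mu + (A^T A / sigma^2 + Sigma^{-1})^{-1} A^T (y - A mu) / sigma^2.
    A = np.diag(A_diag)
    P = A.T @ A / sigma_noise**2 + np.linalg.inv(Sigma)   # posterior precision
    return mu + np.linalg.solve(P, A.T @ (y - A @ mu)) / sigma_noise**2
```

Zero entries on the diagonal of A model missing pixels; at those positions the estimate falls back to the prior, coupled to the observed pixels through Sigma.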
Stochastic Modeling and Resolution-Free Rendering of Film Grain
The realistic synthesis and rendering of film grain is a crucial goal for many amateur and professional photographers and film-makers whose artistic works require the authentic feel of analog photography. The objective of this work is to propose an algorithm that reproduces the visual aspect of film grain texture on any digital image. Previous approaches to this problem either propose unrealistic models or simply blend scanned images of film grain with the digital image, in which case the result is inevitably limited by the quality and resolution of the initial scan. In this work, we introduce a stochastic model to approximate the physical reality of film grain, and propose a resolution-free rendering algorithm to simulate realistic film grain for any digital input image. By varying the parameters of this model, we can achieve a wide range of grain types. We demonstrate this by comparing our results with film grain examples from dedicated software, and show that our rendering results closely resemble these real film emulsions. In addition to realistic grain rendering, our resolution-free algorithm allows for any desired zoom factor, even down to the scale of the microscopic grains themselves.
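As an illustration of the kind of stochastic model involved, the sketch below renders a crude Boolean model: grain centres follow a Poisson process whose local intensity is chosen so that the expected covered area fraction matches the input gray level, and the union of grain disks is rasterised at an arbitrary zoom factor. This is a simplified toy under our own assumptions, not the paper's algorithm, and all names are hypothetical:

```python
import numpy as np

def render_grain(img, r=0.4, zoom=2, rng=None):
    # Toy Boolean-model film grain on a gray image with values in [0, 1).
    rng = np.random.default_rng(rng)
    h, w = img.shape
    u = np.clip(img, 0.0, 0.99)
    # Pick the local Poisson intensity lam so that the expected covered
    # fraction 1 - exp(-lam * pi * r^2) equals the input gray level u.
    lam = -np.log(1.0 - u) / (np.pi * r * r)
    lam_max = float(lam.max())
    # Homogeneous Poisson process on [0,w] x [0,h], then thinning to get
    # the spatially varying intensity.
    n = rng.poisson(lam_max * h * w)
    xs = rng.uniform(0, w, n)
    ys = rng.uniform(0, h, n)
    keep = rng.uniform(0, 1, n) < lam[ys.astype(int), xs.astype(int)] / lam_max
    xs, ys = xs[keep], ys[keep]
    # Rasterise the union of grain disks at the requested output resolution;
    # the model itself is continuous, so any zoom factor works.
    H, W = h * zoom, w * zoom
    yy, xx = np.mgrid[0:H, 0:W]
    px, py = (xx + 0.5) / zoom, (yy + 0.5) / zoom
    out = np.zeros((H, W), dtype=bool)
    for cx, cy in zip(xs, ys):
        out |= (px - cx) ** 2 + (py - cy) ** 2 <= r * r
    return out.astype(float)
```

The brute-force per-disk rasterisation is only for clarity; a practical renderer would evaluate coverage by Monte-Carlo sampling per output pixel.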
FastDVDnet: Towards Real-Time Video Denoising Without Explicit Motion Estimation
In this paper, we propose a state-of-the-art video denoising algorithm based on a convolutional neural network architecture. Until recently, video denoising with neural networks had been a largely underexplored domain, and existing methods could not compete with the performance of the best patch-based methods. The approach we introduce in this paper, called FastDVDnet, shows similar or better performance than other state-of-the-art competitors with significantly lower computing times. In contrast to other existing neural network denoisers, our algorithm exhibits several desirable properties, such as fast run-times and the ability to handle a wide range of noise levels with a single network model. The characteristics of its architecture make it possible to avoid using a costly motion compensation stage while achieving excellent performance. The combination of its denoising performance and lower computational load makes this algorithm attractive for practical denoising applications. We compare our method with different state-of-the-art algorithms, both visually and with respect to objective quality metrics.
DVDnet: A Fast Network for Deep Video Denoising
In this paper, we propose a state-of-the-art video denoising algorithm based
on a convolutional neural network architecture. Previous neural network based
approaches to video denoising have been unsuccessful, as their performance
could not compete with that of patch-based methods. Our approach, however,
outperforms patch-based competitors with significantly lower computing times.
In contrast to other existing neural network denoisers, our algorithm exhibits
several desirable properties, such as a small memory footprint and the ability
to handle a wide range of noise levels with a single network model. The
combination of its denoising performance and lower computational load makes
this algorithm attractive for practical denoising applications. We compare our
method with different state-of-the-art algorithms, both visually and with
respect to objective quality metrics. The experiments show that our algorithm
compares favorably to other state-of-the-art methods. Video examples, code and
models are publicly available at https://github.com/m-tassano/dvdnet.
Properties of Discrete Sliced Wasserstein Losses
The Sliced Wasserstein (SW) distance has become a popular alternative to the
Wasserstein distance for comparing probability measures. Widespread
applications include image processing, domain adaptation and generative
modelling, where it is common to optimise some parameters in order to minimise
SW, which serves as a loss function between discrete probability measures
(since measures admitting densities are numerically unattainable). All these
optimisation problems share the same sub-problem: minimising the Sliced
Wasserstein energy. In this paper we study the properties of , i.e. the SW
distance between two uniform discrete measures with the same number of points,
as a function of the support of one of the measures. We investigate the
regularity and optimisation properties of this energy, as well as its
Monte-Carlo approximation (estimating the expectation in SW using only
samples), and show convergence results on the critical points of to those
of , as well as an almost-sure uniform convergence. Finally, we show that,
in a certain sense, Stochastic Gradient Descent methods minimising and
converge towards (Clarke) critical points of these energies.
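To fix ideas, the quantity being minimised admits a simple Monte-Carlo estimator: project both point clouds onto random directions, sort the one-dimensional projections, and average the squared differences of the paired order statistics. The sketch below implements this standard estimator (a generic illustration of the definition, not the paper's code; the function name is ours):

```python
import numpy as np

def sw2_mc(X, Y, n_proj=500, rng=None):
    # Monte-Carlo estimate of the squared sliced Wasserstein distance between
    # two uniform discrete measures with the same number of points (rows).
    rng = np.random.default_rng(rng)
    n, d = X.shape
    # Random directions drawn uniformly on the unit sphere.
    theta = rng.normal(size=(n_proj, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project onto each direction; 1-D optimal transport between uniform
    # discrete measures with equal numbers of points reduces to sorting.
    pX = np.sort(X @ theta.T, axis=0)
    pY = np.sort(Y @ theta.T, axis=0)
    return float(np.mean((pX - pY) ** 2))
```

Averaging over both points and projections gives exactly the Monte-Carlo approximation of the expected one-dimensional squared Wasserstein cost.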